16 research outputs found

    Enabling sharing and reuse of scientific data

    Get PDF
    The purpose of this study was to develop an understanding of the current state of scientific data sharing that stakeholders could use to develop and implement effective data-sharing strategies and policies. The study developed a conceptual model to describe the process of data sharing and the drivers, barriers, and enablers that determine stakeholder engagement. The conceptual model was used as a framework to structure discussions and interviews with key members of all stakeholder groups. Analysis of the data obtained from interviewees identified a number of themes that highlight key requirements for the development of a mature data-sharing culture.

    First results of the SOAP project. Open access publishing in 2010

    Full text link
    The SOAP (Study of Open Access Publishing) project has compiled data on the current offering of open access publishing in online peer-reviewed journals. Starting from the Directory of Open Access Journals, several sources of data are considered, including inspection of journal web sites and direct inquiries within the publishing industry. Several results are derived and discussed, together with their correlations: the number of open access journals and articles; their subject areas; the starting dates of open access journals; the size and business models of open access publishers; the licensing models; the presence of an impact factor; and the uptake of hybrid open access.
    Comment: Submitted to PLoS ONE

    Highlights from the SOAP project survey. What Scientists Think about Open Access Publishing

    Full text link
    The SOAP (Study of Open Access Publishing) project has run a large-scale survey of researchers' attitudes towards, and experiences with, open access publishing. Around forty thousand answers were collected across disciplines and around the world, showing overwhelming support for the idea of open access, while highlighting funding and (perceived) quality as the main barriers to publishing in open access journals. This article serves as an introduction to the survey and presents this and other highlights from a preliminary analysis of the survey responses. To allow maximal re-use of the information collected by this survey, the data are hereby released under a CC0 waiver, so as to allow libraries, publishers, funding agencies and academics to further analyse risks and opportunities, drivers and barriers, in the transition to open access publishing.
    Comment: Data manual available at http://bit.ly/gI8nct ; compressed CSV data file available at http://bit.ly/gSmm71 ; alternative data formats: CSV http://bit.ly/ejuvKO , XLS http://bit.ly/e6gE7o , XLSX http://bit.ly/gTjyv

    Interoperability and Services Through Shared Identifiers

    No full text
    Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014 (Posters, Demos and Developer "How-To's").
    Shared identifier infrastructures, such as DOIs for research objects and ORCiDs for persons, can improve interoperability, allow for increased efficiency of internal processes, and facilitate the development of innovative third-party services. DataCite and ORCID technical staff will provide a technical walk-through to introduce developers to the current APIs. The EC-funded ODIN project, which builds on the ORCID and DataCite initiatives to connect services and infrastructures, will fund a developer challenge in this regard.
    Dallmeier-Tiessen, Suenje (CERN); Paglione, Laura (ORCID); Peters, Sebastian (DataCite); Scherle, Ryan (Dryad Digital Repository)

    Promoting Interoperability and Services Through Shared Identifiers

    No full text
    Panel at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014 (General Track Papers and Panels). The session was recorded and is available for watching.
    This panel proposes a technical discussion on the use of persistent identifiers as a support for interoperable research platforms and the services built on top of them. Speakers from ORCID and DataCite will discuss implementation challenges for persistent identifier registries, including community engagement and support for the development of third-party services. Dryad and CERN will focus on the challenges and opportunities for integrating shared persistent identifiers.
    Dallmeier-Tiessen, Suenje (CERN); Paglione, Laura (ORCID); Peters, Sebastian (DataCite); Scherle, Ryan (Dryad Digital Repository)

    Implementation and testing of an Authenticity Protocol on a Specific Domain.

    No full text
    In the original definition given in CASPAR, Authenticity Protocols (APs) are the procedures to be followed in order to assess the authenticity of a specific type of Digital Resource (DR). The CASPAR definition is quite general and does not refer to a specific authenticity management model. As part of the activities of APARSEN WP24 we have formalized an authenticity management model based on the principle of performing controls and collecting authenticity evidence in connection with specific events of the DR lifecycle. This makes it possible to trace back all the transformations the DR has undergone since its creation that may have affected its authenticity. The model is complemented by a set of operational guidelines for setting up an Authenticity Management Policy, i.e. for identifying the relevant transformations in the lifecycle and specifying which controls should be performed and which authenticity evidence should be collected in connection with these transformations. To formalize the policy we have resorted to CASPAR's AP definition, which we have adapted and extended to integrate it into our authenticity management model. In our methodology the AP therefore becomes the procedure to be followed in connection with a given lifecycle event to perform the controls and to collect the AER as specified by the authenticity management policy. Accordingly, the original content of this deliverable, which was aimed at "implementing and testing an authenticity protocol on a specific domain", has been adapted and extended to encompass the whole scope of the authenticity evidence management guidelines. The current aim of the deliverable is therefore to test the model and the guidelines at the operational level when dealing with the concrete problem of setting up or improving an LTDP repository in a given environment, so as to arrive at the definition of an adequate authenticity management policy.
    Moreover, instead of concentrating on a single environment, we decided to extend the analysis to multiple test environments provided by APARSEN partners. Shifting to a practical footing and facing the actual problems that arise in the management of a repository has been an important step towards closing the gap that still divides the mostly theoretical results of the scientific community from the actual practices in most repositories, and towards reducing the fragmentation among different approaches that prevents interoperability. The case studies have demonstrated the validity of this approach: on the one hand, the guidelines proved easy to apply and well understood in all the test cases; on the other hand, the simple yet rigorous concepts introduced by the model may provide a common ground for managing authenticity evidence and for exchanging it among different systems. In at least one of the case studies, the guidelines were applied to their full extent, i.e. from the preliminary analysis, to the identification of the relevant lifecycle events, to the detailed specification of the authenticity evidence to be collected, to the formal definition of the authenticity management policy, that is, the specification of the AP. In all cases, referring to the guidelines provided valuable help, both in pointing out weaknesses in current practices and in suggesting reasonable ways to fix them.

    Recommendations for Preservation Data Policies

    No full text
    Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014 (Posters, Demos and Developer "How-To's").
    In this paper, we summarise selected recommendations that should be taken into account when drawing up data policies concerning digital preservation. It is important to understand what current data policies address and whether they miss important topics, such as specific requirements for data preservation. This gives an indication of the possible impact of such data policies on individual communities, for example repository managers, and allows recommendations to be drawn up to guide forthcoming policies. The recommendations suggested in this paper are based on both desktop research on selected data policies and an online survey conducted by the APARSEN project during autumn 2013.
    Lehtonen, Juha (CSC - IT Center for Science, Finland); Helin, Heikki (CSC - IT Center for Science, Finland); Dallmeier-Tiessen, Suenje (CERN – European Organization for Nuclear Research, Switzerland); Guercio, Mariella (CINI – Consorzio Interuniversitario Nazionale per l’Informatica, Italy); Herterich, Patricia (CERN – European Organization for Nuclear Research, Switzerland); Kaur, Kirnn (British Library, United Kingdom); Lavasa, Artemis (CERN – European Organization for Nuclear Research, Switzerland); Salmivalli, Riina (CSC - IT Center for Science, Finland)

    Helmholtz Open Science Briefing. Helmholtz Open Science Forum Forschungssoftware. Report

    No full text
    The Helmholtz Forum Forschungssoftware, which is jointly run by the Task Group Forschungssoftware of the AK Open Science and the HIFIS Software Cluster, held a Helmholtz Open Science Forum on the topic of research software on 7 April 2022. The event was organized by the Helmholtz Open Science Office. The virtual forum addressed three aspects of the open and sustainable handling of research software in the Helmholtz Association: policy, practice, and infrastructures and tools. The event was the second in a series of Helmholtz Open Science Forums on this topic; the first took place in May 2021 under the title "Policies für Forschungssoftware". This report documents the event.